Third-person self-talk facilitates emotion regulation without engaging cognitive control: Converging evidence from ERP and fMRI.
Does silently talking to yourself in the third person constitute a relatively effortless form of self-control? We hypothesized that it does, under the premise that third-person self-talk leads people to think about the self similarly to how they think about others, which provides them with the psychological distance needed to facilitate self-control. We tested this prediction by asking participants to reflect on feelings elicited by viewing aversive images (Study 1) and recalling negative autobiographical memories (Study 2) using either "I" or their own name, while measuring neural activity via ERPs (Study 1) and fMRI (Study 2). Study 1 demonstrated that third-person self-talk reduced an ERP marker of self-referential emotional reactivity (i.e., the late positive potential) within the first second of viewing aversive images without enhancing an ERP marker of cognitive control (i.e., the stimulus-preceding negativity). Conceptually replicating these results, Study 2 demonstrated that third-person self-talk was linked with reduced activation in an a priori defined fMRI marker of self-referential processing (i.e., medial prefrontal cortex) when participants reflected on negative memories, without eliciting increased activity in a priori defined fMRI markers of cognitive control. Together, these results suggest that third-person self-talk may constitute a relatively effortless form of self-control.
MISIM: A Novel Code Similarity System
Code similarity systems are integral to a range of applications, from code recommendation to automated software defect correction. We argue that code similarity is now a first-order problem that must be solved. To begin to address this, we present Machine Inferred Code Similarity (MISIM), a novel end-to-end code similarity system that consists of two core components. First, MISIM uses a novel context-aware semantic structure, which is designed to aid in lifting semantic meaning from code syntax. Second, MISIM provides a neural-based code similarity scoring algorithm, which can be implemented with various neural network architectures with learned parameters. We compare MISIM to three state-of-the-art code similarity systems: (i) code2vec, (ii) Neural Code Comprehension, and (iii) Aroma. In our experimental evaluation across 328,155 programs (over 18 million lines of code), MISIM has 1.5x to 43.4x better accuracy than all three systems.
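Neural code similarity systems typically compare learned vector representations of programs. As a minimal, purely illustrative sketch (the toy embeddings and the cosine metric here are assumptions for exposition, not MISIM's actual architecture or scoring function):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for learned code embeddings.
snippet_a = np.array([0.9, 0.1, 0.3])   # some function
snippet_b = np.array([0.8, 0.2, 0.4])   # a semantically similar rewrite
snippet_c = np.array([-0.5, 0.9, -0.2])  # unrelated code

# A higher score for the semantically similar pair is the desired behavior.
assert cosine_similarity(snippet_a, snippet_b) > cosine_similarity(snippet_a, snippet_c)
```

The interesting engineering is in producing embeddings where this ordering holds for real code pairs; that is the role of MISIM's context-aware semantic structure and learned scoring network.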
A Distributed Energy-balance Melt Model of an Alpine Debris-covered Glacier
Distributed energy-balance melt models have rarely been applied to glaciers with extensive supraglacial debris cover. This paper describes the development of a distributed melt model and its application to the debris-covered Miage glacier, western Italian Alps, over two summer seasons. Sub-debris melt rates are calculated using an existing debris energy-balance model (DEB-Model), and melt rates for clean ice, snow and partially debris-covered ice are calculated using standard energy-balance equations. Simulated sub-debris melt rates compare well to ablation stake observations. Melt rates are highest, and most sensitive to air temperature, on areas of dirty, crevassed ice on the middle glacier. Here melt rates are highly spatially variable because the debris thickness and surface type vary markedly. Melt rates are lowest, and least sensitive to air temperature, beneath the thickest debris on the lower glacier. Debris delays and attenuates the melt signal compared to clean ice, with peak melt occurring later in the day with increasing debris thickness. The continuously debris-covered zone consistently provides ∼30% of total melt throughout the ablation season, with the proportion increasing during cold weather. Sensitivity experiments show that an increase in debris thickness of 0.035 m would offset 1°C of atmospheric warming.
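The attenuating effect of debris thickness can be seen in a highly simplified steady-state conduction sketch (DEB-Model itself solves a full energy balance; the parameter values and the assumption of linear conduction to a 0 °C ice interface below are illustrative only):

```python
RHO_ICE = 900.0    # ice density, kg m^-3
L_FUSION = 3.34e5  # latent heat of fusion, J kg^-1
K_DEBRIS = 1.0     # assumed debris thermal conductivity, W m^-1 K^-1

def sub_debris_melt_rate(surface_temp_c: float, debris_thickness_m: float) -> float:
    """Melt rate (m ice s^-1) from steady-state conduction through a debris
    layer, assuming the debris-ice interface sits at 0 degrees C."""
    flux = K_DEBRIS * surface_temp_c / debris_thickness_m  # W m^-2 into the ice
    return max(flux, 0.0) / (RHO_ICE * L_FUSION)

# For the same debris surface temperature, thicker debris yields less melt.
thin = sub_debris_melt_rate(10.0, 0.05)
thick = sub_debris_melt_rate(10.0, 0.40)
assert thin > thick
```

This inverse dependence on thickness is why a modest thickening of the debris layer can offset a degree of atmospheric warming, as the sensitivity experiments report.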
Standards for Graph Algorithm Primitives
It is our view that the state of the art in constructing a large collection of graph algorithms in terms of linear algebraic operations is mature enough to support the emergence of a standard set of primitive building blocks. This paper is a position paper defining the problem and announcing our intention to launch an open effort to define this standard. Comment: 2 pages, IEEE HPEC 201
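The canonical example of expressing a graph algorithm in linear algebra is breadth-first search, where each level expansion is one matrix-vector product over the adjacency matrix. A minimal dense-NumPy sketch (a standard would supply sparse, semiring-parameterized primitives; the dense arrays and integer masking here are stand-ins for exposition):

```python
import numpy as np

def bfs_levels(adj: np.ndarray, source: int) -> np.ndarray:
    """BFS as repeated matrix-vector products: the next frontier is
    adj.T @ frontier, masked to exclude already-visited vertices.
    Returns each vertex's BFS level (-1 if unreachable)."""
    n = adj.shape[0]
    levels = np.full(n, -1)
    frontier = np.zeros(n, dtype=int)
    frontier[source] = 1
    visited = frontier.astype(bool)
    level = 0
    while frontier.any():
        levels[frontier.astype(bool)] = level
        reached = (adj.T @ frontier) > 0          # one mat-vec per BFS level
        frontier = (reached & ~visited).astype(int)
        visited |= reached
        level += 1
    return levels

# Path graph 0-1-2-3: levels from source 0 are [0, 1, 2, 3].
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
assert list(bfs_levels(A, 0)) == [0, 1, 2, 3]
```

Replacing the arithmetic (+, ×) with other semirings (e.g., min-plus for shortest paths) reuses the same mat-vec primitive for different algorithms, which is the reuse a standard set of building blocks aims to capture.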